
    Opportunistic infection as a cause of transient viremia in chronically infected HIV patients under treatment with HAART

    When highly active antiretroviral therapy is administered for long periods of time to HIV-1 infected patients, most patients achieve viral loads that are "undetectable" by standard assay (i.e., HIV-1 RNA < 50 copies/ml). Yet despite exhibiting sustained viral loads below the level of detection, a number of these patients experience unexplained episodes of transient viremia, or viral "blips". We propose here that transient activation of the immune system by opportunistic infection may explain these episodes of viremia. Indeed, immune activation by opportunistic infection may spur HIV replication, replenish viral reservoirs and contribute to accelerated disease progression. To investigate the effects of concurrent infection on chronically infected HIV patients under treatment with highly active antiretroviral therapy (HAART), we extend a simple dynamic model of the effects of vaccination on HIV infection [Jones and Perelson, JAIDS 31:369-377, 2002] to include growing pathogens. We then propose a more realistic model of immune-cell expansion in the presence of pathogen, and include it in a set of competing models that allow low baseline viral loads in the presence of drug treatment. Programmed expansion of immune cells upon exposure to antigen is a feature not previously included in HIV models, and one that is especially important when simulating an immune response to opportunistic infection. Using these models, we show that viral blips with realistic duration and amplitude can be generated by concurrent infections in HAART-treated patients. (Comment: 30 pages, 9 figures, 1 table. Submitted to the Bulletin of Mathematical Biology.)
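    The mechanism lends itself to a compact simulation. The sketch below couples a standard target-cell/infected-cell/virus model under therapy with a logistically growing pathogen whose antigen drives programmed expansion of immune effectors and transiently activates new target cells, producing a short-lived blip. All parameter values and variable names are illustrative assumptions, not the paper's fitted model.

```python
# Minimal sketch (illustrative parameters, not the paper's fitted equations):
# T = target cells, I = infected cells, V = virus, P = opportunistic pathogen,
# E = pathogen-specific immune effectors. Under HAART (efficacy eps) the viral
# load stays low until P grows, activates extra target cells, and drives a blip.
import numpy as np
from scipy.integrate import solve_ivp

lam, d = 1e4, 0.01            # target-cell supply and death rates (assumed)
k, delta = 8e-7, 1.0          # infection rate, infected-cell death (assumed)
N, c = 100.0, 23.0            # burst size, virion clearance (assumed)
eps = 0.9                     # HAART efficacy (assumed)
r, K = 1.0, 1e6               # pathogen growth rate and carrying capacity
kE, aE, dE = 1e-6, 1e-6, 0.3  # immune kill, expansion, decay rates (assumed)
aT = 2e-6                     # pathogen-driven target-cell activation (assumed)

def rhs(t, y):
    T, I, V, P, E = y
    dT = lam * (1 + aT * P) - d * T - (1 - eps) * k * V * T
    dI = (1 - eps) * k * V * T - delta * I
    dV = N * delta * I - c * V
    dP = r * P * (1 - P / K) - kE * E * P  # pathogen grows, is immune-cleared
    dEdt = aE * P * E - dE * E             # programmed antigen-driven expansion
    return [dT, dI, dV, dP, dEdt]

# Start near the treated quasi-steady state, with the infection freshly seeded.
y0 = [1e6, 10.0, 50.0, 100.0, 100.0]
sol = solve_ivp(rhs, (0, 60), y0, max_step=0.1)
print("peak viral load during the simulated blip:", sol.y[2].max())
```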

    Organising a daily visual diary using multifeature clustering

    The SenseCam is a prototype device from Microsoft that facilitates automatic capture of images of a person's life by integrating a colour camera, storage media and multiple sensors into a small wearable device. However, efficient search methods are required to reduce the user's burden of sifting through the thousands of images that are captured per day. In this paper, we describe experiments using colour spatiogram and block-based cross-correlation image features in conjunction with accelerometer sensor readings to cluster a day's worth of data into meaningful events, allowing the user to quickly browse a day's captured images. Two different low-complexity algorithms are detailed and evaluated for SenseCam image clustering.
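    As a concrete illustration of this low-complexity style of clustering, the sketch below fuses an image-dissimilarity cue with accelerometer activity and marks event boundaries where the fused score peaks; the features, weights and threshold are illustrative stand-ins, not the authors' exact algorithms.

```python
# Sketch of event segmentation for a day of wearable-camera images: adjacent
# images that look different AND coincide with a change in wearer motion are
# treated as event boundaries. All cues and weights here are assumptions.
import numpy as np

def histogram_distance(h1, h2):
    """L1 distance between normalised colour histograms (stand-in feature)."""
    return np.abs(h1 - h2).sum()

def segment_day(histograms, accel_magnitude, w_img=0.7, w_acc=0.3, thresh=0.5):
    """Return image indices judged to start a new event."""
    img_diff = np.array([histogram_distance(histograms[i], histograms[i + 1])
                         for i in range(len(histograms) - 1)])
    acc_diff = np.abs(np.diff(accel_magnitude))
    norm = lambda x: x / (x.max() + 1e-9)      # normalise each cue to [0, 1]
    score = w_img * norm(img_diff) + w_acc * norm(acc_diff)
    return [i + 1 for i, s in enumerate(score) if s > thresh]

# Toy usage: 200 images with random 64-bin histograms and accelerometer data
rng = np.random.default_rng(0)
hists = rng.dirichlet(np.ones(64), size=200)
accel = rng.normal(1.0, 0.2, size=200)
print(segment_day(hists, accel)[:10])
```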

    Measuring the impact of temporal context on video retrieval

    In this paper we describe the findings from the K-Space interactive video search experiments in TRECVid 2007, which examined the effects of including temporal context in video retrieval. The traditional approach to presenting video search results is to maximise recall by offering a user as many potentially relevant shots as possible within a limited amount of time. 'Context'-oriented systems instead allocate a portion of the results presentation space to providing additional contextual cues about the returned results. In video retrieval these cues often include temporal information such as a shot's location within the overall video broadcast and/or its neighbouring shots. We developed two interfaces with identical retrieval functionality in order to measure the effects of such context on user performance. The first system had a 'recall-oriented' interface, where results from a query were presented as a ranked list of shots. The second was 'context-oriented', with results presented as a ranked list of broadcasts. 10 users participated in the experiments, of which 8 were novices and 2 experts. Participants completed a number of retrieval topics using both the recall-oriented and context-oriented systems.

    TRECVid 2007 experiments at Dublin City University

    In this paper we describe our retrieval system and experiments performed for the automatic search task in TRECVid 2007. We submitted the following six automatic runs (a sketch of the underlying score fusion follows this list):
    • F A 1 DCU-TextOnly6: baseline run using only ASR/MT text features.
    • F A 1 DCU-ImgBaseline4: baseline visual-expert-only run, no ASR/MT used; made use of query-time generation of retrieval expert coefficients for fusion.
    • F A 2 DCU-ImgOnlyEnt5: automatic generation of retrieval expert coefficients for fusion at index time.
    • F A 2 DCU-imgOnlyEntHigh3: a combination of coefficient generation which combined the coefficients generated by the query-time approach and the index-time approach, with greater weight given to the index-time coefficient.
    • F A 2 DCU-imgOnlyEntAuto2: as above, except that greater weight is given to the query-time coefficient that was generated.
    • F A 2 DCU-autoMixed1: query-time expert coefficient generation that used both visual and text experts.
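    The runs above all reduce to weighting and summing per-expert scores. The sketch below shows one generic form of that fusion: each expert's scores are min-max normalised, scaled by a per-expert coefficient, and summed per shot. The coefficient values are illustrative; the paper derives them at query time, at index time, or as a weighted mix of both.

```python
# Generic weighted-sum fusion of retrieval experts (illustrative, not the
# paper's exact coefficient-generation method).
def fuse_experts(expert_scores, coefficients):
    """expert_scores: {expert: {shot_id: score}}; coefficients: {expert: weight}.
    Returns shot ids ranked by the fused score."""
    fused = {}
    for expert, scores in expert_scores.items():
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0                      # guard constant scores
        for shot, s in scores.items():
            fused[shot] = fused.get(shot, 0.0) + \
                coefficients[expert] * (s - lo) / span
    return sorted(fused, key=fused.get, reverse=True)

# Toy usage: a text (ASR/MT) expert and a visual expert, hypothetical scores
ranked = fuse_experts(
    {"text":   {"s1": 2.0, "s2": 0.5, "s3": 1.0},
     "visual": {"s1": 0.1, "s2": 0.9, "s3": 0.4}},
    {"text": 0.6, "visual": 0.4},
)
print(ranked)  # shots ordered by fused relevance
```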

    Semi-automatic semantic enrichment of raw sensor data

    One of the more recent sources of large volumes of generated data is sensor devices, where dedicated sensing equipment is used to monitor events and happenings in a wide range of domains, including monitoring human biometrics. In recent trials to examine the effects that key moments in movies have on the human body, we fitted participants with a number of biometric sensor devices and monitored them as they watched a range of different movies in groups. The purpose of these experiments was to examine the correlation between humans' highlights in movies as observed from biometric sensors, and highlights in the same movies as identified by our automatic movie analysis techniques. However, the problem with this type of experiment is that both the analysis of the video stream and the sensor data readings are not directly usable in their raw form because of the sheer volume of low-level data values generated both from the sensors and from the movie analysis. This work describes the semi-automated enrichment of both video analysis and sensor data and the mechanism used to query the data in both centralised environments, and in a peer-to-peer architecture when the number of sensor devices grows to large numbers. We present and validate a scalable means of semi-automating the semantic enrichment of sensor data, thereby providing a means of large-scale sensor management.
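    To make the enrichment step concrete, the sketch below reduces a dense raw biometric stream to a handful of labelled, timestamped events that can be queried directly; the "arousal peak" rule, window size and threshold are illustrative assumptions, not the system's actual enrichment rules.

```python
# Sketch of semi-automatic semantic enrichment: flag sliding windows whose
# mean reading is unusually high relative to the whole stream, and emit each
# as a labelled semantic event. Thresholds and labels are assumptions.
from dataclasses import dataclass

@dataclass
class SemanticEvent:
    start: float
    end: float
    label: str

def enrich(readings, window=10, z_thresh=2.0):
    """readings: list of (timestamp, value) pairs, e.g. heart-rate samples."""
    values = [v for _, v in readings]
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    events, i = [], 0
    while i + window <= len(readings):
        chunk = readings[i:i + window]
        z = (sum(v for _, v in chunk) / window - mean) / std
        if z > z_thresh:
            events.append(SemanticEvent(chunk[0][0], chunk[-1][0], "arousal peak"))
            i += window                   # skip past the flagged window
        else:
            i += 1
    return events

# Toy usage: simulated heart-rate samples with one elevated stretch
hr = [(t, 70.0) for t in range(100)] + [(t, 110.0) for t in range(100, 120)]
print(enrich(hr))
```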

    Exploiting context information to aid landmark detection in SenseCam images

    In this paper, we describe an approach designed to exploit context information in order to aid the detection of landmark images from a large collection of photographs. The photographs were generated using Microsoft's SenseCam, a device designed to passively record a visual diary and cover a typical day of the user wearing the camera. The proliferation of digital photos, along with the associated problems of managing and organising these collections, provides the background motivation for this work. We believe more ubiquitous cameras, such as SenseCam, will become the norm in the future, and the management of the volume of data generated by such devices is a key issue. The goal of the work reported here is to use context information to assist in the detection of landmark images or sequences of images from the thousands of photos taken daily by SenseCam. We will achieve this by analysing the images using low-level MPEG-7 features along with metadata provided by SenseCam, followed by simple clustering to identify the landmark images.
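    One plausible reading of the feature-plus-metadata pipeline is sketched below: describe each photo by a visual feature vector augmented with down-weighted context metadata, cluster the day, and keep the image closest to each cluster centre as that cluster's landmark. The random stand-in features, the weighting and the cluster count are all illustrative assumptions, not the paper's configuration.

```python
# Sketch of landmark selection via simple clustering over combined visual
# features and SenseCam-style context metadata (all values are stand-ins).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
visual = rng.normal(size=(300, 32))      # stand-in for MPEG-7 descriptors
context = rng.normal(size=(300, 4))      # stand-in for sensor metadata
X = np.hstack([visual, 0.5 * context])   # weight context below visual cues

km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X)
landmarks = []
for c in range(12):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    landmarks.append(members[dists.argmin()])  # most representative image
print("landmark image indices:", sorted(landmarks))
```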

    An examination of a large visual lifelog

    With lifelogging gaining in popularity, we examine the differences between visual lifelog photos and explicitly captured digital photos. We do this based on an examination of over a year of continuous visual lifelog capture and a collection of over ten thousand personal digital photos.

    Using text search for personal photo collections with the MediAssist system

    The MediAssist system enables organisation and searching of personal digital photo collections based on contextual information, content-based analysis and semi-automatic annotation. One mode of user interaction uses automatically extracted features to create text surrogates for photos, which enables text search of photo collections without manual annotation. Our evaluation shows that this text search facility is effective for known-item search.
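    The text-surrogate idea can be illustrated compactly: turn automatically extracted context (capture time, a resolved place name, detected concepts) into a bag of words per photo, so an ordinary inverted index answers text queries with no manual annotation. The field names and vocabulary below are illustrative assumptions, not MediAssist's actual schema.

```python
# Sketch of text surrogates for context-based photo search (assumed fields).
from collections import defaultdict

def surrogate(photo):
    """Build a text surrogate from a photo's extracted metadata."""
    words = [photo["weekday"].lower()]
    words.append("evening" if photo["hour"] >= 18 else "daytime")
    words.append(photo["place"].lower())     # e.g. from GPS reverse-lookup
    words.extend(c.lower() for c in photo["concepts"])
    return words

def build_index(photos):
    """Inverted index: word -> set of photo ids containing that word."""
    index = defaultdict(set)
    for pid, photo in photos.items():
        for w in surrogate(photo):
            index[w].add(pid)
    return index

photos = {
    "p1": {"weekday": "Saturday", "hour": 20, "place": "Dublin", "concepts": ["beach"]},
    "p2": {"weekday": "Monday", "hour": 9, "place": "Paris", "concepts": ["indoor"]},
}
idx = build_index(photos)
print(idx["evening"] & idx["dublin"])  # known-item search: {'p1'}
```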